109 research outputs found

    On Byzantine Broadcast in Loosely Connected Networks

    We consider the problem of reliably broadcasting information in a multihop asynchronous network that is subject to Byzantine failures. Most existing approaches give conditions for perfect reliable broadcast (all correct nodes deliver the authentic message and nothing else), but they require a highly connected network. An approach giving only probabilistic guarantees (correct nodes deliver the authentic message with high probability) was recently proposed for loosely connected networks, such as grids and tori. Yet, the proposed solution requires a specific initialization of each node (involving global knowledge), which may be difficult or impossible to guarantee in self-organizing networks such as wireless sensor networks, especially if they are prone to Byzantine failures. In this paper, we propose a new protocol offering guarantees for loosely connected networks that does not require such global-knowledge-dependent initialization. In more detail, we give a methodology to determine whether a set of nodes will always deliver the authentic message, in any execution. Then, we give conditions for perfect reliable broadcast in a torus network. Finally, we provide an experimental evaluation of our solution and determine the number of randomly distributed Byzantine failures that can be tolerated, for a given correct broadcast probability.
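
    As a rough illustration of the kind of local acceptance rule such relay protocols build on (a minimal sketch only, with an assumed threshold parameter and message format; it is not the protocol proposed in the paper), a node might deliver a value only after hearing it from sufficiently many distinct neighbors:

        # Minimal sketch of a neighbor-echo acceptance rule for multihop broadcast.
        # The threshold and message format are assumptions for illustration; this is
        # not the paper's protocol.
        class Node:
            def __init__(self, node_id, neighbors, threshold):
                self.node_id = node_id
                self.neighbors = set(neighbors)
                self.threshold = threshold   # echoes required before accepting a relayed value
                self.echoes = {}             # value -> set of neighbors that echoed it
                self.delivered = None

            def receive(self, sender, value, from_source=False):
                """Record an echo; deliver once enough distinct neighbors vouch for the value."""
                if self.delivered is not None:
                    return
                if from_source:              # direct reception from the source is accepted
                    self.delivered = value
                    return
                self.echoes.setdefault(value, set()).add(sender)
                if len(self.echoes[value]) > self.threshold:
                    self.delivered = value   # enough independent echoes: accept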

    Enhancing efficiency of Byzantine-tolerant coordination protocols via hash functions

    Abstract. Distributed protocols resilient to Byzantine failures are notoriously costly from the computational and communication points of view. In this paper we discuss the role that collision-resistant hash functions can have in enhancing the efficiency of Byzantine-tolerant coordination protocols. In particular, we show two settings in which their use leads to a remarkable improvement of system performance in the case of large data or large populations. More precisely, we show how they can be applied to the implementation of atomic shared objects, and propose a technique that combines randomization and hash functions. We also discuss the benefits of these approaches and compute their complexity.
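
    To give a flavor of the optimization (a hedged sketch only; the function names below are illustrative and not taken from the paper), replicas can exchange fixed-size digests of large values and transfer the full payload only when digests disagree:

        import hashlib

        def digest(value: bytes) -> bytes:
            """Fixed-size, collision-resistant fingerprint of an arbitrarily large value."""
            return hashlib.sha256(value).digest()

        def values_agree(local_value: bytes, remote_digest: bytes) -> bool:
            """Compare a local value against a remote replica using only its 32-byte digest.

            Sending the digest instead of the full value is where the communication
            savings come from; the payload itself is fetched only on a mismatch.
            """
            return digest(local_value) == remote_digest

        # Illustrative use: a reader checks whether its copy matches what the writer advertised.
        local_copy = b"large replicated object state ..."
        advertised = digest(local_copy)
        assert values_agree(local_copy, advertised)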

    A Scalable Byzantine Grid

    Modern networks assemble an ever-growing number of nodes. However, it remains difficult to increase the number of channels per node, so the maximal degree of the network may be bounded. This is typically the case in grid topology networks, where each node has at most four neighbors. In this paper, we address the following issue: if each node is likely to fail in an unpredictable manner, how can we preserve some global reliability guarantees when the number of nodes keeps increasing unboundedly? To be more specific, we consider the problem of reliably broadcasting information on an asynchronous grid in the presence of Byzantine failures -- that is, some nodes may have an arbitrary and potentially malicious behavior. Our requirement is that a constant fraction of correct nodes remain able to achieve reliable communication. Existing solutions can only tolerate a fixed number of Byzantine failures if they adopt a worst-case placement scheme. Besides, if we assume a constant Byzantine ratio (each node has the same probability to be Byzantine), the probability of a fatal placement approaches 1 as the number of nodes increases, and reliability guarantees collapse. In this paper, we propose the first broadcast protocol that overcomes these difficulties. First, the number of Byzantine failures that can be tolerated (if they adopt the worst-case placement) now increases with the number of nodes. Second, we are able to tolerate a constant Byzantine ratio, however large the grid may be. In other words, the grid becomes scalable. This result has important security applications in ultra-large networks, where each node has a given probability to misbehave.

    How Many Cooks Spoil the Soup?

    In this work, we study the following basic question: "How much parallelism does a distributed task permit?" Our definition of parallelism (or symmetry) here is not in terms of speed, but in terms of identical roles that processes have at the same time in the execution. We initiate this study in population protocols, a very simple model that not only allows for a straightforward definition of what a role is, but also encloses the challenge of isolating the properties that are due to the protocol from those that are due to the adversary scheduler, who controls the interactions between the processes. We (i) give a partial characterization of the set of predicates on input assignments that can be stably computed with maximum symmetry, i.e., Θ(N_min), where N_min is the minimum multiplicity of a state in the initial configuration, and (ii) turn our attention to the remaining predicates and prove a strong impossibility result for the parity predicate: the inherent symmetry of any protocol that stably computes it is upper bounded by a constant that depends on the size of the protocol.
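
    For readers unfamiliar with the model, the toy simulation below (an illustrative sketch, not taken from the paper) runs a simple population protocol under a uniformly random scheduler and reports the multiplicity of each state, the quantity the symmetry measure above is defined over:

        import random
        from collections import Counter

        # Toy protocol computing "does any agent start in state 1?":
        # an agent holding 1 converts its interaction partner to 1.
        def delta(a, b):
            return (1, 1) if 1 in (a, b) else (a, b)

        def simulate(states, steps=10_000, seed=0):
            rng = random.Random(seed)
            states = list(states)
            for _ in range(steps):
                i, j = rng.sample(range(len(states)), 2)  # scheduler modeled as uniform random pairing
                states[i], states[j] = delta(states[i], states[j])
            return Counter(states)

        # 100 agents, exactly one initially in state 1; with high probability all end in state 1.
        print(simulate([1] + [0] * 99))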

    Decentralization in Bitcoin and Ethereum Networks

    Blockchain-based cryptocurrencies have demonstrated how to securely implement traditionally centralized systems, such as currencies, in a decentralized fashion. However, there have been few measurement studies on the level of decentralization they achieve in practice. We present a measurement study on various decentralization metrics of two of the leading cryptocurrencies with the largest market capitalization and user base, Bitcoin and Ethereum. We investigate the extent of decentralization by measuring the network resources of nodes and the interconnection among them, the protocol requirements affecting the operation of nodes, and the robustness of the two systems against attacks. In particular, we adapted existing internet measurement techniques and used the Falcon Relay Network as a novel measurement tool to obtain our data. We discovered that neither Bitcoin nor Ethereum has strictly better properties than the other. We also provide concrete suggestions for improving both systems.
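
    One simple way to quantify such concentration (a generic illustration with invented numbers, not necessarily the exact metric or data used in the study) is the minimum number of entities that together control more than half of the mining power:

        def min_entities_for_majority(power_shares):
            """Smallest number of parties whose combined share exceeds 50%.

            power_shares: fractional mining-power shares summing to roughly 1.0.
            The example values are invented, not measurements from the paper.
            """
            total, count = 0.0, 0
            for share in sorted(power_shares, reverse=True):
                total += share
                count += 1
                if total > 0.5:
                    break
            return count

        print(min_entities_for_majority([0.22, 0.18, 0.15, 0.12, 0.10, 0.23]))  # -> 3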

    Block-STM: Scaling Blockchain Execution by Turning Ordering Curse to a Performance Blessing

    Block-STM is a parallel execution engine for smart contracts, built around the principles of Software Transactional Memory. Transactions are grouped in blocks, and every execution of the block must yield the same deterministic outcome. Block-STM further enforces that the outcome is consistent with executing transactions according to a preset order, leveraging this order to dynamically detect dependencies and avoid conflicts during speculative transaction execution. At the core of Block-STM is a novel, low-overhead collaborative scheduler of execution and validation tasks. Block-STM is implemented on the main branch of the Diem Blockchain codebase and runs in production at Aptos. Our evaluation demonstrates that Block-STM is adaptive to workloads with different conflict rates and utilizes the inherent parallelism therein. Block-STM achieves up to 110k tps in the Diem benchmarks and up to 170k tps in the Aptos benchmarks, which is a 20x and 17x improvement over the sequential baseline with 32 threads, respectively. The throughput on a contended workload is up to 50k tps and 80k tps in the Diem and Aptos benchmarks, respectively.
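
    A very reduced sketch of the underlying idea, optimistic execution followed by validation against the preset order (the structure below is illustrative only and is not Block-STM's collaborative scheduler):

        # Toy model of order-aware speculative execution: transactions run optimistically
        # against the initial state, are validated in the preset order, and any transaction
        # that read stale data is re-executed. Illustrative only; not Block-STM itself.

        def execute(tx, snapshot):
            """Run a transaction against a snapshot; return its read set and write set."""
            reads = {k: snapshot.get(k, 0) for k in tx["reads"]}
            writes = {k: fn(reads) for k, fn in tx["writes"].items()}
            return reads, writes

        def run_block(txs, state):
            # Phase 1: optimistic execution against the initial state (as if in parallel).
            results = [execute(tx, state) for tx in txs]
            # Phase 2: validate in the preset order; re-execute on stale reads.
            committed = dict(state)
            for i, tx in enumerate(txs):
                reads, writes = results[i]
                if any(committed.get(k, 0) != v for k, v in reads.items()):
                    reads, writes = execute(tx, committed)  # dependency detected: retry
                committed.update(writes)
            return committed

        # Two transactions touching the same account: the second must observe the first's write.
        txs = [
            {"reads": ["a"], "writes": {"a": lambda r: r["a"] + 10}},
            {"reads": ["a"], "writes": {"a": lambda r: r["a"] * 2}},
        ]
        print(run_block(txs, {"a": 5}))  # {'a': 30}, identical to sequential execution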

    Structural cloud audits that protect private information

    As organizations and individuals have begun to rely more and more heavily on cloud-service providers for critical tasks, cloud-service reliability has become a top priority. It is natural for cloud-service providers to use redundancy to achieve reliability. For example, a provider may replicate critical state in two data centers. If the two data centers use the same power supply, however, then a power outage will cause them to fail simultaneously; replication per se does not, therefore, enable the cloud-service provider to make strong reliability guarantees to its users. Zhai et al. [28] present a system, which they refer to as a structural-reliability auditor (SRA), that uncovers common dependencies in seemingly disjoint cloud-infrastructural components (such as the power supply in the example above) and quantifies the risks that they pose. In this paper, we focus on the need for structural-reliability auditing to be done in a privacy-preserving manner. We present a privacy-preserving structural-reliability auditor (P-SRA), discuss its privacy properties, and evaluate a prototype implementation built on the Sharemind SecreC platform [6]. P-SRA is an interesting application of secure multi-party computation (SMPC), which has not often been used for graph problems. It can achieve acceptable running times even on large cloud structures by using a novel data-partitioning technique that may be useful in other applications of SMPC.
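
    At its simplest, the non-private version of such an audit can be pictured as looking for infrastructure components shared by supposedly independent replicas (a sketch of the underlying idea only, with invented dependency data; the paper's contribution is performing this analysis without revealing each party's private structure):

        # Sketch: find shared dependencies (single points of failure) across deployments
        # that are meant to be redundant. Dependency data is invented for illustration.

        def common_dependencies(deployments):
            """Return infrastructure components that every replica depends on."""
            dep_sets = [set(deps) for deps in deployments.values()]
            return set.intersection(*dep_sets) if dep_sets else set()

        deployments = {
            "datacenter_east": {"power_grid_A", "isp_1", "dns_provider_X"},
            "datacenter_west": {"power_grid_A", "isp_2", "dns_provider_X"},
        }
        print(common_dependencies(deployments))  # {'power_grid_A', 'dns_provider_X'}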

    Efficient state-based CRDTs by delta-mutation

    CRDTs are distributed data types that make eventual consistency of a distributed object possible and non-ad-hoc. Specifically, state-based CRDTs ensure convergence through disseminating the entire state, which may be large, and merging it into other replicas; whereas operation-based CRDTs disseminate operations (i.e., small states) assuming an exactly-once reliable dissemination layer. We introduce Delta State Conflict-Free Replicated Datatypes (δ-CRDT) that can achieve the best of both worlds: small messages with an incremental nature, disseminated over unreliable communication channels. This is achieved by defining δ-mutators that return a delta-state, typically much smaller than the full state, which is joined to both the local and remote states. We introduce the δ-CRDT framework, and we explain it by establishing a correspondence to current state-based CRDTs. In addition, we present an anti-entropy algorithm that ensures causal consistency, and two δ-CRDT specifications of well-known replicated datatypes. This work is co-financed by the North Portugal Regional Operational Programme (ON.2, O Novo Norte), under the National Strategic Reference Framework (NSRF), through the European Regional Development Fund (ERDF), within project NORTE07-0124-FEDER-000058; and by EU FP7 SyncFree project (609551).
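
    As a very small concrete example of the delta idea (a sketch assuming only what the abstract states; it omits the anti-entropy and causal-consistency machinery), a grow-only set can expose a δ-mutator that returns just the newly added element, which is then joined into both the local and remote states:

        # Minimal grow-only set (G-Set) with a delta mutator; illustrative sketch only.

        def add_delta(element):
            """delta-mutator: returns only the increment, not the whole state."""
            return {element}

        def join(state_a, state_b):
            """Join (least upper bound) for a G-Set is plain set union."""
            return state_a | state_b

        # Replica A adds an element and ships only the small delta.
        replica_a = {"x", "y"}
        delta = add_delta("z")
        replica_a = join(replica_a, delta)   # apply locally

        # Replica B receives the delta over an unreliable channel; duplicate delivery
        # is harmless because join is idempotent.
        replica_b = {"x"}
        replica_b = join(replica_b, delta)
        replica_b = join(replica_b, delta)   # re-delivery: no effect
        print(replica_a, replica_b)          # {'x', 'y', 'z'} {'x', 'z'}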

    Estimating seasonal abundance of a central place forager using counts and telemetry data

    R.J.S. was supported by a Natural Environment Research Council studentship. Obtaining population estimates of species that are not easily observed directly can be problematic. However, central place foragers can often be observed some of the time, e.g. when seals are hauled out. In these instances, population estimates can be derived from counts, combined with information on the proportion of time that animals can be observed. We present a modelling framework to estimate seasonal absolute abundance using counts and information from satellite telemetry data. The method was tested on a harbour seal population in an area of southeast Scotland. Counts were made monthly, between November 2001 and June 2003, when seals were hauled out on land, and were corrected for the proportion of time the seals were at sea using satellite telemetry. Harbour seals (n=25) were tagged with satellite relay data loggers between November 2001 and March 2003. To estimate the proportion of time spent hauled out, time at sea on foraging trips was modelled separately from haul-out behaviour close to haul-out sites because of the different factors affecting these processes. A generalised linear mixed model framework was developed to capture the longitudinal nature of the data and the repeated measures across individuals. Despite seasonal variability in the number of seals counted at haul-out sites, the model generated estimates of abundance, with an overall mean of 846 (95% CI: 767 to 979). The methodology shows the value of using count and telemetry data collected concurrently for estimating absolute abundance, information that is essential to assess interactions between predators, fish stocks and fisheries.
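
    The core correction behind such estimates is simple (a schematic illustration with invented numbers, not the paper's mixed-model analysis): scale each count by the estimated probability that a seal is hauled out and therefore countable.

        # Schematic correction of a haul-out count by the estimated haul-out probability.
        # Both numbers below are invented; the paper estimates the proportion of time
        # hauled out from telemetry data with a generalised linear mixed model.

        def estimate_abundance(count, p_hauled_out):
            """Abundance estimate = observed count / probability of being observable."""
            return count / p_hauled_out

        count = 300            # seals counted on land during one survey (hypothetical)
        p_hauled_out = 0.6     # estimated proportion of time hauled out (hypothetical)
        print(round(estimate_abundance(count, p_hauled_out)))  # -> 500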